Pentagon grapples with securing AI as it moves toward autonomous warfare
NASHVILLE — As the Pentagon moves to fold artificial intelligence into the machinery of war, senior military leaders are confronting a less visible but equally consequential challenge: how to secure — and control — the software that may soon help make battlefield decisions.
Autonomous weapons are no longer a distant prospect, Chairman of the Joint Chiefs of Staff Gen. Dan Caine told an audience at Vanderbilt University’s Asness Summit on Modern Conflict and Emerging Threats. They are going to be a “key and essential part of everything we do,” he added.
His remarks made clear that the shift is not simply about deploying smarter drones or faster systems. It is also about building a digital infrastructure — from command-and-control networks to machine-learning models — that can be trusted under adversarial conditions. “We are doing a lot of thinking about this in the joint force right now,” he said, pointing to the growing role of autonomy in areas like targeting, logistics and battlefield coordination.
The urgency, he suggested, is compounded by a widening gap between the military and the private sector, where much of the most advanced AI development is taking place. “Probably everybody in this room uses some flavor of a [large language model] every single day,” he said. “So, we have to really normalize this and become early adopters.”
The Defense Department is increasingly dependent on privately developed software systems that were not originally designed for military use — raising concerns about vulnerabilities, supply chain risks and the potential for adversaries to exploit them.
Those very concerns have been brought into sharp relief by a standoff with Anthropic, one of the country’s leading AI firms.
The company recently withheld public release of a powerful model, Mythos Preview, citing cybersecurity risks and concerns about how it could be misused if widely deployed. At the same time, intelligence agencies have shown interest in its capabilities; the National Security Agency has reportedly been granted access to the model.
Earlier this year, Anthropic declined to ease restrictions on how its systems could be used — including limits on domestic surveillance and fully autonomous weapons. The refusal sparked an unusually public clash in which the Pentagon designated the company a “supply chain risk,” a term typically applied to foreign vendors whose technology could introduce security vulnerabilities into government systems.
The White House followed with an order directing federal agencies to phase out the use of Anthropic’s tools. The company has since challenged the decision in court, and a federal judge temporarily blocked the ban in March. The government has said it intends to appeal.
In recent days, President Donald Trump has suggested the dispute may be easing, saying the company is “shaping up” and could “be of great use.”
The episode underscores a deeper and unresolved issue: The United States is racing to adopt AI for national security purposes while relying on a commercial ecosystem that does not always align with military priorities — particularly when it comes to risk tolerance and control.
For military planners, the concern is not only whether AI systems can make faster or better decisions, but whether those systems can be secured against manipulation, data poisoning or unintended behavior in high-stakes environments.
Those risks are no longer theoretical. Lawmakers have already pressed the Pentagon for answers about whether AI systems were used in a deadly strike on an Iranian school during the opening hours of the U.S.-Israel war against Iran, raising questions about how such tools are tested, audited and governed.
Caine also pointed to a more prosaic obstacle: the government’s own procurement system. “We have to write better contracts,” he said, arguing that current acquisition frameworks are ill-suited to software that evolves continuously and requires ongoing security updates. Traditional contracts, designed for fixed hardware systems, can slow the deployment of critical technologies and leave gaps in accountability.
In the context of AI, those gaps can have operational consequences — particularly if responsibility for failures or vulnerabilities is unclear. Caine said contracts should be structured to share risk between the government and private companies, with the goal of ensuring that systems are not only effective, but resilient.
As the Pentagon pushes deeper into AI-enabled warfare, the challenge is becoming less about whether the technology works — and more about whether it can be trusted, secured and controlled in a domain where the margin for error is vanishingly small.
Dina Temple-Raston
is the host and managing editor of the Click Here podcast and a senior correspondent at Recorded Future News. She previously served on NPR’s Investigations team, covering breaking news, national security, technology and social justice, and created and hosted the award-winning Audible podcast “What Were You Thinking.”